The dangers of emotionless AI
AI excels at efficiency but lacks empathy, raising serious concerns
- By Gurmehar
- Monday, 01 Sep, 2025
AI systems are often called “intelligent,” but they do not think like humans. They do not understand fairness, justice, or nuance. They recognise patterns in data, nothing more. And those patterns come from the past—messy, biased, and imperfect. Yet, we ask AI to make high-stakes decisions about credit, healthcare, jobs, education, parole, and even citizenship. The danger is that these systems can amplify human biases while hiding behind a veneer of efficiency.
I’ve spent years working with technology that scales and transforms businesses. I’ve been in countless late-night “war rooms,” building systems that promised speed, accuracy, and efficiency. Over time, I realised a disturbing truth: in our race to optimise, we traded judgment for convenience. And now, AI is making choices that we may not fully understand or are unwilling to own.
A vivid example came when I reviewed an AI triage system designed to flag serious internal policy violations. On paper, it was a success: faster decisions, fewer bottlenecks, consistent metrics. But something felt wrong. The system’s decisions were based on proxy variables nobody fully understood. Cases flagged by junior employees in certain centres were routinely deprioritised—not out of malice, but because the model reflected patterns in historical data. We had optimised the wrong things. The model looked efficient, but it lacked judgment.
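To make the mechanism concrete, here is a minimal, hypothetical sketch (invented data, invented column names such as severity and office_code, and scikit-learn's LogisticRegression) of how a model trained on historical triage decisions can pick up a proxy for the reporting centre and learn back the old deprioritisation pattern:

```python
# Hypothetical illustration: a triage model trained on past decisions
# inherits a proxy-variable bias. "office_code" stands in for the reporting
# centre; historically, severe cases from centre 1 were escalated far less
# often, and the model simply learns that pattern back.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

severity = rng.uniform(0, 1, n)        # the signal we actually care about
office_code = rng.integers(0, 2, n)    # proxy: 1 = historically sidelined centre

# Historical labels: severe cases were escalated, except that roughly 70% of
# severe cases from centre 1 were quietly deprioritised by past reviewers.
escalated = ((severity > 0.6) &
             ~((office_code == 1) & (rng.uniform(0, 1, n) < 0.7))).astype(int)

X = np.column_stack([severity, office_code])
model = LogisticRegression().fit(X, escalated)

# Two cases with identical severity, differing only in the proxy feature:
probe = np.array([[0.8, 0], [0.8, 1]])
print(model.predict_proba(probe)[:, 1])  # centre 1 gets a much lower escalation score
```

Nothing in the training objective is malicious; the model just scores identical cases differently depending on the proxy, which mirrors the pattern described above.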
I paused the rollout, retrained the system, and regained some confidence—but the incident left me uneasy. We may be teaching AI to make decisions we are no longer willing to question ourselves. Accuracy is not morality.
The quiet erosion of responsibility
AI does not have intent. It cannot care about ethics. But it enables a phenomenon I call moral outsourcing. Teams build models to simplify hard decisions. Stakeholders rely on the model’s recommendations. When decisions feel uncomfortable, no one wants to intervene. And when problems arise, people say, “That’s what the model said.” In this way, humans can abdicate responsibility while letting the model take the blame.
Even well-intentioned AI can cause harm. Credit systems may unintentionally rank applicants from certain regions lower because of historical inequities. Hiring tools can favour familiar schools or cities. Fraud detection may flag unusual users unfairly. These outcomes are not malicious—they are the product of optimisation without reflection. And because metrics look good, the deeper consequences often go unnoticed.
Regulation can help, but rules alone are not enough. The real problem is cultural. Organisations have normalised decisions without reflection, systems without empathy, and optimisation without ethics. Ethics is not a checklist—it is a practice. It is asking hard questions even when dashboards show “green.” It shows up in conversations, in analysts who question the data, in product leads who push back on deadlines, and in executives who insist on human judgment before rollout.
Leadership is key. Many panels talk about “ethical AI frameworks,” but what’s needed are ethical instincts. Culture matters. Teams must include people with diverse experiences who notice what others might miss. The best leaders create space for discomfort, because discomfort is where judgment lives—and judgment is where humanity begins.
Every line of code, every feature, and every decision in AI reflects a value or trade-off. There is no neutral AI. If we don’t consciously embed morality into these systems, we are leaving it out by design. AI is already shaping our lives more than we realise. The real question is whether we still have the courage and humility to stay human in the process.
We cannot simply ask if AI can make decisions; we must ask whether it should. Otherwise, we risk handing over not only decisions but our responsibility and conscience. The moment we focus only on what AI can do and ignore what it ought to do, we lose control of the very systems we built to serve us.
